Long short-term memory and Learning-to-learn in networks of spiking neurons

Neural Information Processing Systems

Recurrent networks of spiking neurons (RSNNs) underlie the astounding computing and learning capabilities of the brain. But computing and learning capabilities of RSNN models have remained poor, at least in comparison with ANNs. We address two possible reasons for that. One is that RSNNs in the brain are not randomly connected or designed according to simple rules, and they do not start learning as a tabula rasa network. Rather, RSNNs in the brain were optimized for their tasks through evolution, development, and prior experience. Details of these optimization processes are largely unknown. But their functional contribution can be approximated through powerful optimization methods, such as backpropagation through time (BPTT). A second major mismatch between RSNNs in the brain and models is that the latter only show a small fraction of the dynamics of neurons and synapses in the brain. We include neurons in our RSNN model that reproduce one prominent dynamical process of biological neurons that takes place at the behaviourally relevant time scale of seconds: neuronal adaptation.
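The neuronal adaptation mentioned above can be illustrated with a minimal simulation sketch: a leaky integrate-and-fire neuron whose firing threshold rises after each spike and decays back on a slow time scale. This is only an illustrative toy model of the adapting-neuron idea the abstract describes; all parameter values and the function name `simulate_alif` are assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's model): an adaptive leaky
# integrate-and-fire (ALIF) neuron. After each spike the threshold is
# raised via the adaptation variable `a`, which decays back with a slow
# time constant tau_a, giving spike-frequency adaptation on a time
# scale much longer than the membrane time constant.
def simulate_alif(input_current, dt=1.0, tau_m=20.0, tau_a=2000.0,
                  v_th0=1.0, beta=0.5, v_reset=0.0):
    v, a = 0.0, 0.0            # membrane potential, adaptation variable
    spikes = []
    for I in input_current:
        v += dt / tau_m * (-v + I)   # leaky integration of the input
        a -= dt / tau_a * a          # adaptation decays slowly
        if v >= v_th0 + beta * a:    # adaptive threshold
            spikes.append(1)
            v = v_reset
            a += 1.0                 # threshold increases after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

# With adaptation (beta > 0) the neuron fires less and less under a
# constant input; with beta = 0 it fires at a fixed rate.
adapting = simulate_alif(np.full(200, 2.0))
non_adapting = simulate_alif(np.full(200, 2.0), beta=0.0)
```

Comparing `adapting.sum()` with `non_adapting.sum()` shows the effect: the adapting neuron emits markedly fewer spikes under the same constant drive, which is the slow dynamical process the abstract refers to.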


Reviews: Long short-term memory and Learning-to-learn in networks of spiking neurons

Neural Information Processing Systems

Summary: Recurrent networks of leaky integrate-and-fire neurons with spike-frequency adaptation are trained with backpropagation through time (adapted to spiking neurons) to perform digit recognition (temporal MNIST), speech recognition (TIMIT), learning-to-learn on simple regression tasks, and learning to find a goal location in simple navigation tasks. The performance on temporal MNIST and TIMIT is similar to that of LSTM networks. The regression and navigation tasks demonstrate that connection weights exist that allow the network to solve simple tasks using the short-term memory of spiking neurons with adaptation, without the need for ongoing synaptic plasticity.

Quality: The selection of tasks is interesting, the results are convincing, and the supplementary information seems to provide sufficient detail to reproduce them. But the writing could be improved significantly.
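Adapting backpropagation through time to spiking neurons, as the review mentions, requires handling the non-differentiable spike: a common approach is to keep the hard threshold in the forward pass and substitute a smooth surrogate ("pseudo") derivative in the backward pass. The sketch below shows one such surrogate, a dampened triangular function; the exact surrogate shape and the constant `gamma` are illustrative assumptions, not necessarily the paper's choices.

```python
import numpy as np

# Hedged sketch of a surrogate gradient for the spike nonlinearity.
# Forward: hard threshold (Heaviside step of the membrane potential).
# Backward: a dampened triangular pseudo-derivative centered at the
# threshold, so gradients can flow through BPTT despite the step.
def spike(v, v_th=1.0):
    """Forward pass: 1 where the membrane potential reaches threshold."""
    return (v >= v_th).astype(float)

def spike_pseudo_derivative(v, v_th=1.0, gamma=0.3):
    """Backward pass: gamma * max(0, 1 - |(v - v_th) / v_th|)."""
    return gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))
```

The pseudo-derivative peaks at `gamma` when the potential sits exactly at threshold and falls to zero far from it, so credit assignment concentrates on neurons that were close to firing.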


Long short-term memory and Learning-to-learn in networks of spiking neurons

Bellec, Guillaume, Salaj, Darjan, Subramoney, Anand, Legenstein, Robert, Maass, Wolfgang

Neural Information Processing Systems